Conversation

@pavanimajety (Collaborator) commented Jan 28, 2025

@github-actions commented:

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, they run only the fastcheck CI, which exercises a small, essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add the ready label to the PR
  • Enable auto-merge.

🚀

mergify bot commented Jan 31, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @pavanimajety.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Jan 31, 2025
@pavanimajety pavanimajety changed the title [Kernel] Add ModelOpt NVFP4 Checkpoint Support [Kernel] Add ModelOpt FP4 Checkpoint Support Feb 27, 2025
@mergify mergify bot removed the needs-rebase label Feb 27, 2025
@mgoin mgoin self-assigned this Feb 27, 2025
@pavanimajety pavanimajety force-pushed the modelopt-ckpt-nvfp4 branch 2 times, most recently from 1aaebfb to dd4fad2 on March 7, 2025 23:49
@pavanimajety pavanimajety marked this pull request as ready for review March 9, 2025 20:53
Signed-off-by: Pavani Majety <pmajety@nvidia.com>
@pavanimajety pavanimajety force-pushed the modelopt-ckpt-nvfp4 branch from dd4fad2 to bc07ee7 on March 9, 2025 21:59
@mgoin (Member) left a comment

LGTM, just a few comments, thanks!

Comment on lines 226 to 233

```python
def get_quant_method(self, layer: torch.nn.Module,
                     prefix: str) -> Optional["QuantizeMethodBase"]:
    from vllm.attention.layer import Attention  # Avoid circular import
    if isinstance(layer, LinearBase):
        return ModelOptNvFp4LinearMethod(self)
    elif isinstance(layer, Attention):
        return ModelOptFp8KVCacheMethod(self)
    return None
```
@mgoin (Member) commented:

It looks like we are missing a check of prefix against exclude_modules to ignore excluded layers, as noted in the model config: https://huggingface.co/nvidia/DeepSeek-R1-FP4/blob/761db7ea1b0b750e29fa78ebd2ce449e619809c5/hf_quant_config.json#L10-L15

@pavanimajety (Collaborator, Author) replied:

DeepSeek-R1-FP4 doesn't work with this PR (we are missing support for FP4 group GEMM). I added back exclude_modules and now load the linear layer classes accordingly. Actual correctness will be checked after adding support for the R1-FP4 model in a future PR.
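
For illustration, here is a minimal sketch of the kind of prefix/exclude_modules check being discussed; the helper name and the substring-matching rule are assumptions for this sketch, not necessarily the PR's actual implementation:

```python
from typing import List


def is_layer_excluded(prefix: str, exclude_modules: List[str]) -> bool:
    # Hypothetical helper: treat a layer as excluded from FP4 quantization
    # if any entry from the checkpoint's "exclude_modules" list appears in
    # the layer's fully qualified name (prefix).
    return any(module in prefix for module in exclude_modules)


# Example: "lm_head" is listed under exclude_modules in hf_quant_config.json.
assert is_layer_excluded("model.lm_head", ["lm_head"])
assert not is_layer_excluded("model.layers.0.mlp.down_proj", ["lm_head"])
```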

Comment on lines +381 to +384

```python
# for input only the contracting dimension has a constraint.
x_m, _ = x.shape
w_n, _ = layer.weight.shape
output_shape = [x_m, w_n]
```
@mgoin (Member) commented:

What is the constraint? There seems to be none.

@pavanimajety (Collaborator, Author) replied:

Removing this comment, since we explicitly check the correctness of x's shape in scaled_fp4_quant. The constraint is that the innermost dimension must be divisible by block_size.
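
As a concrete illustration of that constraint, a minimal sketch of the shape check (the function name is hypothetical; the real validation happens inside scaled_fp4_quant, and block_size = 16 is assumed here, matching the NVFP4 scaling-block size):

```python
import torch


def check_fp4_input_shape(x: torch.Tensor, block_size: int = 16) -> None:
    # Hypothetical pre-check mirroring the constraint described above:
    # the innermost (contracting) dimension must be divisible by block_size,
    # because FP4 scales are stored per contiguous block of elements.
    if x.shape[-1] % block_size != 0:
        raise ValueError(f"Innermost dim {x.shape[-1]} is not divisible "
                         f"by block_size {block_size}")


check_fp4_input_shape(torch.empty(4, 64))    # OK: 64 % 16 == 0
# check_fp4_input_shape(torch.empty(4, 60))  # would raise ValueError
```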

Signed-off-by: Pavani Majety <pmajety@nvidia.com>
@pavanimajety (Collaborator, Author) commented:

Thank you for your review, @mgoin!

@pavanimajety pavanimajety requested a review from mgoin March 10, 2025 16:22
@WoosukKwon (Collaborator) commented:

@mgoin thanks for your review. Can you please take another look?

@mgoin (Member) left a comment

LGTM, thanks @pavanimajety!

@mgoin mgoin added the quantization and ready (ONLY add when PR is ready to merge/full CI is needed) labels Mar 11, 2025
@mgoin mgoin enabled auto-merge (squash) March 11, 2025 22:15
Signed-off-by: Pavani Majety <pmajety@nvidia.com>
auto-merge was automatically disabled March 12, 2025 02:02

Head branch was pushed to by a user without write access

@WoosukKwon WoosukKwon enabled auto-merge (squash) March 12, 2025 02:28
@WoosukKwon WoosukKwon merged commit debd6bb into vllm-project:main Mar 12, 2025
57 checks passed
richardsliu pushed a commit to richardsliu/vllm that referenced this pull request Mar 14, 2025
Signed-off-by: Pavani Majety <pmajety@nvidia.com>
Signed-off-by: Richard Liu <ricliu@google.com>
lulmer pushed a commit to lulmer/vllm that referenced this pull request Apr 7, 2025
Signed-off-by: Pavani Majety <pmajety@nvidia.com>
Signed-off-by: Louis Ulmer <ulmerlouis@gmail.com>
shreyankg pushed a commit to shreyankg/vllm that referenced this pull request May 3, 2025
Signed-off-by: Pavani Majety <pmajety@nvidia.com>
RichardoMrMu pushed a commit to RichardoMrMu/vllm that referenced this pull request May 12, 2025
Signed-off-by: Pavani Majety <pmajety@nvidia.com>
Signed-off-by: Mu Huai <tianbowen.tbw@antgroup.com>

Labels

quantization, ready (ONLY add when PR is ready to merge/full CI is needed)

3 participants